So
we're integrating knowledge into learning.
And the price for that, of course, is that we have to use logic, because that is the only real, and by "real" I mean compositional, way of representing knowledge in learning algorithms.
We started out with what we call explanation based learning.
You basically try to understand explanations or worked examples.
And the idea here is that you get the explanation from trying to prove what you're seeing using
the background knowledge.
And our example has been trying to understand differentiation.
In particular, doing simple stuff like simplification.
That was really our example.
And the idea here is that you generate an explanation by yourself that explains the
observation.
Namely that, in this case, 1*0 + x simplifies to x.
Then you generate a proof and then you try to generalize the explanation.
And you do that by running exactly the same proof, the same explanation, on a variabilized
version of the observation.
In this proof you have the concrete instantiations, 1 and 0 and so on, that you had in
the example.
But if you rerun the proof, some of these instantiations actually get forced on you when
the variables are unified.
And if you collect the leaves of the proof, that gives you, in the usual way, a formula
like this: if you assume the leaves, you get the observation.
(A form of cheating, in a way.)
And now you can realize that this part here is actually true irrespective of z, or for
all z, just from the background knowledge, which means you are left with a more general
rule.
So that's the rule we've learned from that example, by generating our own explanation,
rerunning it, and then pruning the tree.
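The generalization-by-replay step can be sketched in code. The following is a minimal, hypothetical Python sketch, not the lecture's actual system: terms are nested tuples, two rewrite rules stand in for the background knowledge, and rerunning the proof on a variabilized copy of the observation forces some of the fresh variables back to constants, which yields the more general rule.

```python
import itertools

# Hypothetical EBL-style generalization sketch (not the lecture's code).
# Terms are nested tuples like ('+', ('*', 1, 0), 'x');
# variables are strings starting with '?'.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def walk(t, s):
    # Follow variable bindings in the substitution s.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    # First-order unification; returns an extended substitution or None.
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def apply_subst(t, s):
    t = walk(t, s)
    if isinstance(t, tuple):
        return tuple(apply_subst(x, s) for x in t)
    return t

# Background knowledge as rewrite rules (lhs -> rhs).
RULES = [
    (('*', '?z', 0), 0),     # z * 0 = 0
    (('+', 0, '?y'), '?y'),  # 0 + y = y
]

def simplify(term):
    # "Prove" the observation: simplify bottom-up with the background rules.
    if isinstance(term, tuple):
        term = tuple(simplify(x) for x in term)
        for lhs, rhs in RULES:
            s = unify(lhs, term, {})
            if s is not None:
                return simplify(apply_subst(rhs, s))
    return term

def variabilize(term, counter=None):
    # Replace every constant leaf by a fresh variable; keep the operators.
    if counter is None:
        counter = itertools.count(1)
    if isinstance(term, tuple):
        return (term[0],) + tuple(variabilize(x, counter) for x in term[1:])
    return f'?v{next(counter)}'

def replay(term, bindings):
    # Rerun the same proof on the variabilized term; unification with the
    # rule patterns now forces some fresh variables back to constants
    # (here the second argument of * is forced to 0).  Rule variables are
    # kept in one global substitution for simplicity, which is fine for
    # this small example.
    if isinstance(term, tuple):
        term = tuple(replay(x, bindings) for x in term)
        for lhs, rhs in RULES:
            s = unify(lhs, term, dict(bindings))
            if s is not None:
                bindings.update(s)
                return replay(apply_subst(rhs, s), bindings)
    return apply_subst(term, bindings)

observation = ('+', ('*', 1, 0), 'x')
print(simplify(observation))                 # -> x : the concrete proof works

goal = variabilize(observation)              # ('+', ('*', '?v1', '?v2'), '?v3')
bindings = {}
result = replay(goal, bindings)
# The learned, more general rule: ?v2 was forced to 0, the rest stayed general.
print(apply_subst(goal, bindings), '->', result)
# ('+', ('*', '?v1', 0), '?v3') -> ?v3    i.e. v1*0 + v3 simplifies to v3
```

The key point is in `replay`: the constant 0 in the rule pattern is what forces the corresponding goal variable, so the learned rule is exactly as general as the proof allows, no more.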
There's a couple of things we can do here.
Instead of just collecting the leaves here, we can collect the roots of any sub-trees.
In this case, Prim is actually a much better choice because it is more general than
ArithVar, and that gives us different ways of learning new rules.
You want to learn rules where the preconditions are, as we say, operational, meaning you
can check them easily without a lot of inference.
And you want to optimize the size of the proof that is left over, because that determines
the macro steps you get from these more general rules.
And those macro steps give you better efficiency.
And in a way, if you think back to last semester, we had this concept of derived
inference rules: the idea that instead of expanding the definition of implication over
and, or, and so on into a huge proof sub-tree, we could abbreviate it.
We are doing exactly the same thing here, only now we have an algorithm that comes up
with these derived rules of inference by itself.
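As a reminder of that analogy, a derived inference rule is proved once from the basic calculus and then used as a single macro step. A standard example of such a rule (chosen here for illustration, not necessarily the one from last semester's lecture) is modus tollens:

```latex
% Modus tollens: derivable once from the basic rules for => and negation,
% then usable as one step instead of re-expanding the proof each time.
\[
  \frac{A \Rightarrow B \qquad \neg B}{\neg A}
\]
```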
We've done that with natural intelligence last semester.
We're doing it with machine learning now.
And of course, this scales, and possibly goes beyond what we humans are used to, because
we can do it systematically.
Any questions about explanation based learning?
The name doesn't come from somebody explaining it to you; rather, you explain it to
yourself.
Presenters
Accessible via
Open access
Duration
01:30:05 min
Recording date
2018-07-05
Uploaded on
2018-07-05 23:04:28
Language
en-US
The course builds on the lecture Künstliche Intelligenz I from the winter semester and continues it.
Learning objectives and competences
Subject, learning, and methodological competence
-
Knowledge: Students become familiar with fundamental representation formalisms and algorithms of Artificial Intelligence.
-
Application: The concepts are applied to examples from the real world (exercises).
-
Analysis: Through modelling in the machine, students learn to better assess human intelligence capabilities.